Active galactic nuclei (AGN) are supermassive black holes with luminous accretion disks found in some galaxies, and are thought to play an important role in galaxy evolution. However, traditional optical spectroscopy for identifying AGN requires time-intensive observations. We train a convolutional neural network (CNN) to distinguish AGN host galaxies from non-active galaxies using a sample of 210,000 Sloan Digital Sky Survey galaxies. We evaluate the CNN on 33,000 galaxies that are spectrally classified as composites, and find correlations between galaxy appearances and their CNN classifications, which hint at evolutionary processes that affect both galaxy morphology and AGN activity. With the advent of the Vera C. Rubin Observatory, Nancy Grace Roman Space Telescope, and other wide-field imaging telescopes, deep learning methods will be instrumental for quickly and reliably shortlisting AGN samples for future analyses.
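The abstract's core operation, a CNN extracting morphological features from galaxy images, can be illustrated with a single convolution. This is a minimal stdlib sketch, not the paper's network; the cutout, kernel, and function names are purely illustrative, and a real pipeline would use a deep-learning framework on full SDSS cutouts.

```python
# Minimal sketch of a CNN's basic building block: a 2D convolution that
# responds to morphological features (here, a point-like bright nucleus)
# in a tiny toy "galaxy cutout". Illustrative only.

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation on nested lists."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# Toy 5x5 cutout with a bright centre, loosely mimicking a compact nucleus.
cutout = [
    [0, 0, 0, 0, 0],
    [0, 1, 2, 1, 0],
    [0, 2, 9, 2, 0],
    [0, 1, 2, 1, 0],
    [0, 0, 0, 0, 0],
]
# A centre-surround kernel responds strongly to point-like sources.
kernel = [
    [-1, -1, -1],
    [-1,  8, -1],
    [-1, -1, -1],
]
response = conv2d(cutout, kernel)
print(response[1][1])  # strongest response at the nucleus: 60.0
```

A trained CNN stacks many such learned kernels with nonlinearities, which is what lets it separate AGN hosts from non-active galaxies directly from imaging.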
High-resolution optical tactile sensors are increasingly used in robot learning environments because of their ability to capture large amounts of data directly relevant to the agent's interaction with its environment. However, the barrier to entry for research in this area is high, due to the cost of tactile robot platforms, specialised simulation software, and sim-to-real methods that lack generality across different sensors. In this letter, we extend the Tactile Gym simulator to include three new optical tactile sensors (TacTip, DIGIT and DigiTac) of the two most popular types: the GelSight style (image-shading based) and the TacTip style (marker based). We demonstrate that a single sim-to-real approach can be used with these three different sensors to achieve strong real-world performance, despite the significant differences between the real tactile images. Additionally, we lower the barrier of entry to the proposed tasks by adapting them to an inexpensive 4-DoF robot arm, further enabling the dissemination of this benchmark. We validate the extended environments on three physically-interactive tasks requiring a sense of touch: object pushing, edge following and surface following. The results of our experimental validation highlight some differences between these sensors, which may help future researchers select and customise the physical characteristics of tactile sensors for different manipulation scenarios.
Deep learning combined with high-resolution tactile sensing could lead to highly capable dexterous robots. However, progress is slow because of the specialised equipment and expertise required. The DIGIT tactile sensor offers low-cost, high-resolution touch using a GelSight-type sensor. Here we customise the DIGIT to the TacTip family of soft biomimetic optical tactile sensors, which have a 3D-printed sensing surface. The DIGIT-TacTip (DigiTac) enables direct comparison between these distinct tactile sensor types. To conduct this comparison, we introduce a tactile robot system comprising a desktop arm, mounts and 3D-printed test objects. We compare the DIGIT, DigiTac and TacTip using tactile servo control with a PoseNet deep learning model, on edge- and surface-following tasks over 3D shapes. All three sensors perform similarly at pose prediction, but their constructions lead to differing performance under servo control, offering guidance for researchers selecting or innovating tactile sensors. All hardware and software for replicating this study will be openly released.
Classifying samples in incomplete datasets is a common aim for machine learning practitioners, but it is non-trivial. Missing data are found in most real-world datasets, and these missing values are typically imputed using established methods, followed by classification of the now-complete, imputed samples. The focus of machine learning researchers then becomes optimising the downstream classification performance. In this study, we highlight that the quality of the imputation must also be considered. We demonstrate how the commonly used measures of quality are flawed, and propose a new class of discrepancy scores which focus on how well a method recreates the overall distribution of the data. Finally, we highlight the compromised interpretability of classifier models trained using poorly imputed data.
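The idea of a distribution-level discrepancy score can be sketched with a two-sample Kolmogorov-Smirnov statistic between observed and imputed values of a feature. This is a hedged stdlib illustration in the spirit of the abstract, not the paper's actual scores; all data and names are invented for the example.

```python
import bisect

# Toy discrepancy score: the KS statistic (max gap between empirical CDFs)
# between a feature's observed values and the values an imputer filled in.
# A method that collapses the distribution scores worse than one that
# preserves its spread, even if both have the same mean.

def ks_statistic(sample_a, sample_b):
    """Max absolute difference between the two empirical CDFs."""
    a, b = sorted(sample_a), sorted(sample_b)

    def ecdf(sorted_vals, x):
        # Fraction of values <= x.
        return bisect.bisect_right(sorted_vals, x) / len(sorted_vals)

    points = sorted(set(a) | set(b))
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in points)

observed = [1.0, 2.0, 3.0, 4.0, 5.0]

# Mean imputation collapses everything onto a single value ...
mean_imputed = [3.0, 3.0, 3.0, 3.0]
# ... while imputations that mimic the observed spread score far better.
spread_imputed = [1.5, 2.5, 3.5, 4.5]

print(ks_statistic(observed, mean_imputed))    # large discrepancy (0.4)
print(ks_statistic(observed, spread_imputed))  # small discrepancy (0.2)
```

Both imputations have the same mean, so a mean-error measure would not distinguish them; a distribution-level score does, which is the abstract's point.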
Aquatic locomotion is a classic fluid-structure interaction (FSI) problem of interest to biologists and engineers. Solving the fully coupled FSI equations for incompressible Navier-Stokes and finite elasticity is computationally expensive. Optimising robotic swimmer designs in such systems typically involves cumbersome, gradient-free procedures on top of the already costly simulation. To address this challenge, we present a novel, fully differentiable hybrid approach to FSI that combines a 2D direct numerical simulation of the swimmer's deformable solid structure with a physics-constrained neural network surrogate that captures the hydrodynamic effects of the fluid. For the deformable solid simulation of the swimmer's body, we use state-of-the-art techniques from the field of computer graphics to speed up the finite element method (FEM). For the fluid simulation, we use a U-Net architecture trained with a physics-based loss function to predict the flow field at each time step. The pressure and velocity field outputs from the neural network are sampled around the boundary of the swimmer via an immersed boundary method (IBM) to compute its swimming motion accurately and efficiently. We demonstrate the computational efficiency and differentiability of our hybrid simulator on a 2D carangiform swimmer. Thanks to its differentiability, the simulator can be used for control design of soft bodies immersed in fluids via direct gradient-based optimisation.
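One typical ingredient of a physics-based loss for an incompressible-flow surrogate is a penalty on the divergence of the predicted velocity field, since incompressible Navier-Stokes requires div(u) = 0. The following is a hedged finite-difference sketch of such a penalty on a toy grid, not the paper's implementation; the fields and function name are invented for illustration.

```python
# Toy physics penalty: mean squared central-difference divergence of a 2D
# velocity field (u, v) on a uniform grid with spacing h. A surrogate
# trained with such a term is pushed towards divergence-free predictions.

def divergence_penalty(u, v, h=1.0):
    """Mean squared divergence over interior grid points."""
    n, m = len(u), len(u[0])
    total, count = 0.0, 0
    for i in range(1, n - 1):        # i indexes y (rows)
        for j in range(1, m - 1):    # j indexes x (columns)
            du_dx = (u[i][j + 1] - u[i][j - 1]) / (2 * h)
            dv_dy = (v[i + 1][j] - v[i - 1][j]) / (2 * h)
            total += (du_dx + dv_dy) ** 2
            count += 1
    return total / count

n = 4
# A rigid-rotation field u = -y, v = x is exactly divergence-free ...
u_rot = [[-float(i) for j in range(n)] for i in range(n)]
v_rot = [[float(j) for j in range(n)] for i in range(n)]
# ... while a uniformly expanding field u = x, v = y is not (div = 2).
u_exp = [[float(j) for j in range(n)] for i in range(n)]
v_exp = [[float(i) for j in range(n)] for i in range(n)]

print(divergence_penalty(u_rot, v_rot))  # 0.0
print(divergence_penalty(u_exp, v_exp))  # 4.0  (= 2 squared)
```

In the actual method this penalty would be one differentiable term in the U-Net's training loss, evaluated on the predicted flow at each time step.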
Deep learning (DL) models have provided state-of-the-art performance on various medical imaging benchmark challenges, including the Brain Tumor Segmentation (BraTS) challenges. However, the task of focal pathology multi-compartment segmentation (e.g., tumor and lesion sub-regions) is particularly challenging, and potential errors hinder translating DL models into clinical workflows. Quantifying the reliability of DL model predictions in the form of uncertainties could enable clinical review of the most uncertain regions, thereby building trust and paving the way towards clinical translation. Recently, a number of uncertainty estimation methods have been introduced for DL medical image segmentation tasks. Developing metrics to evaluate and compare the performance of uncertainty measures will assist end users in making more informed decisions. In this study, we explore and evaluate a metric developed during the BraTS 2019-2020 task on uncertainty quantification (QU-BraTS), designed to assess and rank uncertainty estimates for brain tumor multi-compartment segmentation. This metric (1) rewards uncertainty estimates that produce high confidence in correct assertions and assign low confidence to incorrect assertions, and (2) penalises uncertainty measures that lead to a higher percentage of correct assertions being filtered out as uncertain. We further benchmark the segmentation uncertainties generated by the 14 independent teams participating in QU-BraTS 2020, all of which also participated in the main BraTS segmentation task. Overall, our findings confirm the importance and complementary value of uncertainty estimates for segmentation algorithms, highlighting the need for uncertainty quantification in medical image analysis. Our evaluation code is publicly available at https://github.com/ragmeh11/qu-brats.
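The evaluation principle described above can be illustrated with a toy "confidence filtering" computation. This is a hedged sketch in the spirit of the metric, not the official QU-BraTS evaluation code; voxel values and names are invented for the example.

```python
# Toy illustration: voxels whose uncertainty exceeds a threshold tau are
# withheld (e.g., flagged for clinical review), and the Dice score is
# recomputed on the remaining voxels. A good uncertainty measure raises
# Dice under filtering because it flags mostly erroneous voxels.

def filtered_dice(pred, truth, uncertainty, tau):
    """Dice over voxels whose uncertainty is <= tau (others filtered out)."""
    tp = fp = fn = 0
    for p, t, u in zip(pred, truth, uncertainty):
        if u > tau:
            continue  # voxel withheld as too uncertain
        if p and t:
            tp += 1
        elif p and not t:
            fp += 1
        elif t and not p:
            fn += 1
    return 2 * tp / (2 * tp + fp + fn)

pred        = [1, 1, 1, 0, 0, 1]
truth       = [1, 1, 0, 0, 1, 1]
# The two errors (indices 2 and 4) were flagged as highly uncertain.
uncertainty = [0.1, 0.2, 0.9, 0.1, 0.8, 0.2]

print(filtered_dice(pred, truth, uncertainty, tau=1.0))  # 0.75, all kept
print(filtered_dice(pred, truth, uncertainty, tau=0.5))  # 1.0, errors filtered
```

The actual metric integrates such filtered scores over thresholds and additionally penalises filtering out correct voxels, so that trivially marking everything uncertain cannot score well.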
Purpose: Inter-scan motion is a substantial source of error in $R_1$ estimation, and can be expected to increase at 7T where the $B_1$ field is more inhomogeneous. The established correction scheme does not translate to 7T since it requires a body coil reference. Here we introduce two alternatives that outperform the established method. Since they compute relative sensitivities, they do not require body coil images. Theory: The proposed methods use coil-combined magnitude images to obtain the relative coil sensitivities. The first method computes the relative sensitivities efficiently via a simple ratio; the second by fitting a more sophisticated generative model. Methods: $R_1$ maps were computed using the variable flip angle (VFA) approach. Multiple datasets were acquired at 3T and 7T, with and without motion between the acquisition of the VFA volumes. $R_1$ maps were constructed without correction, with the proposed corrections, and (at 3T) with the previously established correction scheme. Results: At 3T, the proposed methods outperform the baseline method. Inter-scan motion artefacts were also reduced at 7T. However, reproducibility only converged on that of the no-motion condition when position-specific transmit field effects were also incorporated. Conclusion: The proposed methods simplify inter-scan motion correction of $R_1$ maps and are applicable at both 3T and 7T, where a body coil is typically not available. Open-source code for all methods is publicly available.
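The first method's core idea, estimating a relative receive sensitivity as a simple ratio of coil-combined magnitude images, can be sketched numerically. This is a hedged toy illustration, not the authors' code; the signals and sensitivities are invented, and real data would need smoothing and masking before taking ratios.

```python
# Toy 1D illustration: the same anatomy imaged at two head positions is
# modulated by position-dependent receive sensitivities. Their voxel-wise
# ratio gives the *relative* sensitivity, with no body-coil reference,
# and dividing it out aligns one image with the other.

def relative_sensitivity(img_a, img_b):
    """Voxel-wise ratio estimating the relative receive sensitivity."""
    return [a / b for a, b in zip(img_a, img_b)]

# Same underlying signal, seen through two position-dependent sensitivities.
signal = [100.0, 80.0, 60.0]
sens_a = [1.00, 1.10, 1.20]   # head position during scan A
sens_b = [1.05, 1.00, 0.90]   # head position during scan B

img_a = [s * c for s, c in zip(signal, sens_a)]
img_b = [s * c for s, c in zip(signal, sens_b)]

rel = relative_sensitivity(img_a, img_b)          # = sens_a / sens_b voxel-wise
corrected_a = [x / r for x, r in zip(img_a, rel)]
print(corrected_a)  # matches img_b up to float rounding
```

Removing this differential modulation between the VFA volumes is what suppresses the inter-scan motion artefact in the resulting $R_1$ maps.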
Simulation has recently become key for deep reinforcement learning to safely and efficiently acquire general and complex control policies from visual and proprioceptive inputs. Tactile information is not usually considered, despite its direct relation to environment interaction. In this work, we present a suite of simulated environments tailored towards tactile robotics and reinforcement learning. A simple and fast method of simulating optical tactile sensors is provided, where high-resolution contact geometry is represented as depth images. Proximal Policy Optimisation (PPO) is used to learn successful policies across all considered tasks. A data-driven approach enables translation of the current state of a real tactile sensor to a corresponding simulated depth image. This policy is implemented within a real-time control loop on a physical robot, demonstrating zero-shot sim-to-real policy transfer on several physically-interactive tasks requiring a sense of touch.
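The representation described above, contact geometry as a depth image, can be sketched with a toy penetration model. This is a hedged illustration of the general idea, not Tactile Gym's implementation; the grids and function name are invented for the example.

```python
# Toy tactile-sensor simulation: per-pixel penetration of an object into
# the sensing surface, clipped and normalised to [0, 1], yielding the
# kind of depth image an RL policy could consume as its observation.

def contact_depth_image(surface_height, object_height, max_depth=1.0):
    """Per-pixel penetration into the sensing surface, normalised to [0, 1]."""
    img = []
    for srow, orow in zip(surface_height, object_height):
        img.append([
            min(max(o - s, 0.0), max_depth) / max_depth
            for s, o in zip(srow, orow)
        ])
    return img

# Flat 3x3 sensing surface at height 0; a corner of an object pressing in.
surface = [[0.0] * 3 for _ in range(3)]
obj = [
    [0.0, 0.0, 0.0],
    [0.0, 0.5, 1.5],
    [0.0, 1.5, 2.0],
]
depth = contact_depth_image(surface, obj, max_depth=1.0)
print(depth[2][2])  # deepest contact saturates at 1.0
print(depth[1][1])  # partial contact: 0.5
```

Because such depth images are cheap to render in simulation and a real sensor's images can be translated into the same format, the sim-trained policy transfers zero-shot.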
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants (32%) stated that they did not have enough time for method development. 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once, which was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
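Since the survey singles out k-fold cross-validation as underused (only 37% of participants), a minimal stdlib sketch of the technique may be useful. It is illustrative only and not tied to any challenge entry; the function name and fold scheme (contiguous, unshuffled folds) are assumptions for the example.

```python
# Minimal k-fold cross-validation index generator: split n_samples into k
# folds, hold each fold out once as validation, train on the rest, and
# average the per-fold scores in a real pipeline.

def k_fold_splits(n_samples, k):
    """Yield (train_indices, val_indices) pairs for k contiguous folds."""
    indices = list(range(n_samples))
    fold_size = n_samples // k
    for i in range(k):
        start = i * fold_size
        # The last fold absorbs any remainder when k does not divide n.
        stop = n_samples if i == k - 1 else start + fold_size
        val = indices[start:stop]
        train = indices[:start] + indices[stop:]
        yield train, val

splits = list(k_fold_splits(10, k=5))
print(len(splits))    # 5 folds
print(splits[0][1])   # first validation fold: [0, 1]
```

In practice one would shuffle (or stratify) the indices first; the point is that every sample is used for validation exactly once, giving a more reliable estimate of generalisation than a single held-out split.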
Likelihood-based deep generative models have recently been shown to exhibit pathological behaviour under the manifold hypothesis as a consequence of using high-dimensional densities to model data with low-dimensional structure. In this paper we propose two methodologies aimed at addressing this problem. Both are based on adding Gaussian noise to the data to remove the dimensionality mismatch during training, and both provide a denoising mechanism whose goal is to sample from the model as though no noise had been added to the data. Our first approach is based on Tweedie's formula, and the second on models which take the variance of added noise as a conditional input. We show that, surprisingly, while well motivated, these approaches only sporadically improve performance over not adding noise, and that other methods of addressing the dimensionality mismatch are empirically more adequate.
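For reference, the Gaussian-noise case of Tweedie's formula underlying the first approach can be stated in one line (standard form; the notation here is chosen for this note, not taken from the paper):

```latex
% Tweedie's formula for Gaussian noise: if y = x + \epsilon with
% \epsilon \sim \mathcal{N}(0, \sigma^2 I), and p_\sigma denotes the
% density of the noisy data y, then the posterior mean of the clean data is
\[
  \mathbb{E}[x \mid y] \;=\; y + \sigma^{2}\,\nabla_{y} \log p_{\sigma}(y),
\]
% so a model of the noisy density yields a denoiser directly via its score.
```

This is why training on noised data still permits approximately "noise-free" sampling: the learned density of the noisy data supplies the score term needed to denoise.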